Cross-National Measures of the Intensity of COVID-19 Public Health Policies

Robert Kubinec (et al.)

Introduction

  • While many studies of COVID-19 policies have been published (Phillips, Zhang, and Petherick 2021; Hale et al. 2021; Flaxman et al. 2020; Haug et al. 2020), we still know relatively little about how effective these policies are at containing the virus.

  • Much of the problem, we argue, has to do with measurement, a necessary prior step to causal inference.

  • In this paper, we present new COVID-19 policy intensity scores that are robust to measurement error and permit accurate interpretation of the effect of COVID-19 policies on other outcomes.

Every Scientist on March 16th, 2020

Lockdowns Shmockdowns

  • For example, lockdowns are often seen as a singular COVID-19 restriction. But they are actually a combination of different policies:

    • Restrictions on movement of various kinds.

      • Can you leave your house?

      • Can you travel to stores?

      • What about emergency travel?

    • Restrictions on gathering

      • Can you meet with family members?

      • Coworkers?

      • Recreational activities?

Not Understanding Measurement Has Consequences

  • One common approach is to use simple binary classifications for policies, i.e., coded 1 if any policy is imposed.

  • Policies are often highly correlated because they are part of an over-arching policy framework.

  • The Flaxman et al. (2020) study had severe problems with this—reclassifying lockdowns or moving their dates could dramatically shift the weight of different policy indicators.

Poor Measurement Results in False Inferences

  • The inference problem is similar to interpreting control variables in a regression model: we only see the direct effects, which can be very misleading (Hünermund and Louw, n.d.; Keele, Stevenson, and Elwert 2020).

  • These issues show up in existing work as implausible reported associations, such as lockdowns increasing COVID-19 infections.

Example: Sharma et al. (2021) (Nature Communications)

  • The paper used a very informative prior on the effect of COVID-19 policies on reducing infections:

    • “our prior places 80% of its mass on positive effects, reflecting a belief that NPIs are more likely to reduce transmission than to increase it.”

    • Results without this prior show positive associations between lockdowns and COVID-19.


Example: Haug et al. (2020) (Nature Human Behavior)

Resolving the Issue

  • We can fix this measurement issue by isolating the variation in disparate policy records that is directed at a given policy domain (business restrictions, social gatherings, masks, etc.).

  • Assume each policymaker has a time-varying ideal point \(x_{it}\) for a given policy domain and chooses policies \(Y_{ij}\) to maximize utility, where each policy \(j\) has a Yes point \(Y_j\) and a No point \(N_j\) in the ideal point space:

\[ U_{ijt} = \lVert x_{it} - N_j \rVert^2 - \lVert x_{it} - Y_j \rVert^2 \qquad(1)\]

DAG for a Given Policy Domain

Simulation DGP - Ideal Point Model

For each iteration, draw a value from the following distributions for each latent parameter: time-varying ideal points \(x_{it}\), policy yes points \(Y_j\), no points \(N_j\), and policymaker utilities \(U_{ijt}\):

\[\begin{align} x_{i(t=1)} &\sim \text{Normal}(0,1)\\ x_{it} &\sim \text{Normal}(x_{i(t-1)},0.25)\\ N_j &\sim \text{Exponential}(1)\\ Y_j &\sim N_j + \text{Exponential}(2)\\ U_{ijt} &= \lVert x_{it} - N_j \rVert^2 - \lVert x_{it} - Y_j \rVert^2 \end{align}\]
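The latent draws above can be sketched in code. Python stands in here for the authors' R workflow, and the array sizes are illustrative assumptions (note NumPy's `exponential` takes a scale, so Exponential(rate 2) becomes `scale=0.5`):

```python
import numpy as np

rng = np.random.default_rng(1234)
n_persons, n_items, n_periods = 50, 10, 20  # illustrative sizes (assumption)

# Time-varying ideal points follow a random walk with innovation sd 0.25
x = np.empty((n_persons, n_periods))
x[:, 0] = rng.normal(0, 1, n_persons)
for t in range(1, n_periods):
    x[:, t] = rng.normal(x[:, t - 1], 0.25)

# Policy "no" and "yes" points, with Y_j always above N_j
N = rng.exponential(1, n_items)
Y = N + rng.exponential(0.5, n_items)  # Exponential(2) = rate 2 = scale 0.5

# Quadratic spatial utility: U_ijt = ||x_it - N_j||^2 - ||x_it - Y_j||^2
U = (x[:, None, :] - N[None, :, None]) ** 2 - (x[:, None, :] - Y[None, :, None]) ** 2
```

The broadcasting in the last line produces one utility per policymaker, policy, and period.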

Draw Policy Indicators from Utilities

For each draw of the latent parameters, generate the following observed data: discrete and continuous policies \(Y_{ij}\) and COVID cases \(I_{it}\) as a function of the utilities \(U_{ijt}\) and ideal points \(x_{it}\):

\[\begin{align} Y_{i(j<6)} &\sim \text{Normal}(U_{ijt}, 0.5) \\ M_{i(j>5)} &\sim \text{Normal}(0, 0.5) \\ Y_{i(j>5)} &\sim \text{Bernoulli}(q(U_{ijt} + M_{ij}))\\ I_{it} & \sim \text{Binomial}(q(-10 - 2 x_{it}), 10000000) \end{align}\]
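A minimal, self-contained Python sketch of the full data-generating process, assuming \(q(\cdot)\) is the inverse logit and using illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_persons, n_items, n_periods = 50, 10, 20  # illustrative sizes (assumption)

# Latent draws, as on the previous slide
x = np.empty((n_persons, n_periods))
x[:, 0] = rng.normal(0, 1, n_persons)
for t in range(1, n_periods):
    x[:, t] = rng.normal(x[:, t - 1], 0.25)
N = rng.exponential(1, n_items)
Ypts = N + rng.exponential(0.5, n_items)
U = (x[:, None, :] - N[None, :, None]) ** 2 - (x[:, None, :] - Ypts[None, :, None]) ** 2

def q(z):
    # q(.) taken to be the inverse logit (an assumption about the notation)
    return 1.0 / (1.0 + np.exp(-z))

# First five policies observed as continuous; the rest as binary with error M
Y_cont = rng.normal(U[:, :5, :], 0.5)
M = rng.normal(0, 0.5, (n_persons, n_items - 5, n_periods))
Y_bin = rng.binomial(1, q(U[:, 5:, :] + M))
# Cases out of 10 million at risk fall as policy intensity x rises
I = rng.binomial(10_000_000, q(-10 - 2 * x))
```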

Estimate Utilities

  • We recover the policymaker ideal points \(\hat{x_i}\) using a Bayesian item-response theory model for each observed policy indicator \(Y_{ij}\):

\[ Pr(Y_{ij}) = g(\delta_j^{'} \hat{x_i} - \beta_j) \qquad(2)\]
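Equation 2's response function can be written out directly. Here \(g\) is assumed to be the inverse logit, giving a standard two-parameter IRT model:

```python
import numpy as np

def irt_2pl(x_hat, delta, beta):
    """Two-parameter IRT response: Pr(Y_ij = 1) = g(delta_j * x_hat_i - beta_j).

    g is assumed to be the inverse logit; delta_j is the item discrimination
    and beta_j the item difficulty (notation assumptions, not from the slides).
    """
    return 1.0 / (1.0 + np.exp(-(np.asarray(delta) * x_hat - beta)))

# A highly discriminating policy separates low and high intensity sharply
p_lo, p_hi = irt_2pl(np.array([-1.0, 1.0]), delta=3.0, beta=0.0)
```

The discrimination \(\delta_j\) controls how informative each policy indicator is about the underlying intensity score.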

Test for Observed Policy Effect on COVID Cases

Then we regress COVID cases \(I_{it}\) on:

  1. Observed policy indicators \(Y_{ij}\):

\[ I_{it} = \alpha_1 + \beta_1 Y_{ij} \qquad(3)\]

  2. Estimated policy intensity scores/ideal points \(\hat{x_{it}}\):

\[ I_{it} = \alpha_2 + \beta_2 \hat{x_{it}} \qquad(4)\]
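A quick way to see which regression recovers the simulated effect is to run both on fake data. This Python sketch simplifies the DGP on purpose: a linear outcome instead of the binomial case model, one noisy binary indicator, and the true \(x_{it}\) standing in for the estimated scores:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
q = lambda z: 1.0 / (1.0 + np.exp(-z))  # inverse logit

x = rng.normal(0, 1, n)                          # true policy intensity
# a single noisy binary policy indicator driven by x (hypothetical setup)
Y = rng.binomial(1, q(2 * x + rng.normal(0, 1, n)))
# outcome with a true slope of -2 on x, echoing the simulation's case model
I = -10 - 2 * x + rng.normal(0, 0.5, n)

b1 = np.polyfit(Y.astype(float), I, 1)[0]        # Eq. 3: observed indicator
b2 = np.polyfit(x, I, 1)[0]                      # Eq. 4: intensity score
```

Here `b2` sits near the true slope of −2, while `b1` conflates the scale of the binary indicator with its measurement error.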

Which coefficient will match the simulation effect of ideal points on policy outcomes?

Results of Simulation

Data

  • To estimate these indices, we combine CoronaNet and Oxford datasets on COVID-19 restrictions.

  • We map dozens of indicators onto six policy domains: masking, health monitoring, health resources, school restrictions, business restrictions, and social distancing.

  • We estimate these using time-varying Bayesian item response models (via the idealstan R package) with the same specification as Equation 2.

  • Estimates of policy intensity/ideal points \(\hat{x_{it}}\) include uncertainty.

Discrimination Parameters


Time-Varying Scores

Regression Model: Policy Intensity and Contact

A very boring regression table:

More Info

The best way to get the latest version of our scores is with our R package CoronaNetR on CRAN:

Paper draft available here:

References

Flaxman, Seth, Swapnil Mishra, Axel Gandy, H. Juliette T. Unwin, Thomas A. Mellan, Helen Coupland, Charles Whittaker, et al. 2020. “Estimating the Effects of Non-Pharmaceutical Interventions on COVID-19 in Europe.” Nature 584 (7820): 257–61. https://doi.org/10.1038/s41586-020-2405-7.
Hale, Thomas, Noam Angrist, Rafael Goldszmidt, Beatriz Kira, Anna Petherick, Toby Phillips, Samuel Webster, et al. 2021. “A Global Panel Database of Pandemic Policies (Oxford COVID-19 Government Response Tracker).” Nature Human Behaviour, March, 1–10. https://doi.org/10.1038/s41562-021-01079-8.
Haug, Nils, Lukas Geyrhofer, Alessandro Londei, Elma Dervic, Amélie Desvars-Larrive, Vittorio Loreto, Beate Pinior, Stefan Thurner, and Peter Klimek. 2020. “Ranking the Effectiveness of Worldwide COVID-19 Government Interventions.” Nature Human Behaviour 4 (12): 1303–12. https://doi.org/10.1038/s41562-020-01009-0.
Hünermund, Paul, and Beyers Louw. n.d. “On the Nuisance of Control Variables in Regression Analysis.” https://doi.org/10.48550/arXiv.2005.10314.
Keele, Luke, Randolph T. Stevenson, and Felix Elwert. 2020. “The Causal Interpretation of Estimated Associations in Regression Models.” Political Science Research and Methods 8 (1): 1–13. https://doi.org/10.1017/psrm.2019.31.
Phillips, Toby, Yuxi Zhang, and Anna Petherick. 2021. “A Year of Living Distantly: Trends in the Use of Stay-at-Home Orders over the First 12 Months of the COVID-19 Pandemic.” Rochester, NY. https://papers.ssrn.com/abstract=3847818.
Sharma, Mrinank, Sören Mindermann, Charlie Rogers-Smith, Gavin Leech, Benedict Snodin, Janvi Ahuja, Jonas B. Sandbrink, et al. 2021. “Understanding the Effectiveness of Government Interventions Against the Resurgence of COVID-19 in Europe.” Nature Communications 12 (1): 5820. https://doi.org/10.1038/s41467-021-26013-4.